Adds example on Hydra integration for running environments #67
Conversation
Signed-off-by: Mayank Mittal <[email protected]>
Cool to incorporate a hydra example! I am a "power user" of hydra and was very happy to see the dataclass approach Orbit is providing (rather than the yaml hierarchies we see in OIGE).
I have a lot of other thoughts about how to best use hydra in this setting, but I get that this PR is about showing a simple example.
If there's interest, I would be happy to provide a slightly more sophisticated example as a followup that integrates hydra and uses the orbit envs / rlgames. Let me know what you think.
env_cfg = OmegaConf.to_object(env_cfg)
# modify the environment configuration
if env_cfg.sim.device == "cpu":
    env_cfg.sim.use_gpu_pipeline = False
This could be done with a pretty simple resolver directly in the config, like:

OmegaConf.register_new_resolver("evaluate_use_gpu", lambda device: device.startswith("cuda"))

@dataclass
class SimConfig:
    device: str = "${sim_device}"
    ...
    use_gpu_pipeline: bool = "${evaluate_use_gpu: ${task.sim.device}}"
Oh, that's great! I am not an expert in Hydra, so any tips are welcome!
I was planning to have a version of the train.py scripts with Hydra for all the workflows too, since it is quite convenient. I think that to handle the agent configs and env configs together, I would need to do some sort of config composing.
Regarding the suggestion: that makes sense, but I was wondering if there's a way to keep Hydra only as a thin layer and not embed Hydra resolvers internally.
In general, the practice we are trying to follow is that people should be able to use other config management systems too, i.e. we are not binding to a particular solution. Though I can see how this has some trade-offs, since we won't be able to fully exploit the tool.
Yeah, I completely agree with preserving the ability to use other config management systems. In my workflow, I tend to write dataclasses which are compatible across different configuration systems (both as schema and as direct config structures).
I try to avoid too much "config logic" within the main code base, both to avoid:
1) being locked in to that particular tool, and
2) lowering the signal-to-noise ratio, since config parsing is definitely noise haha
Let me think a bit on whether there's another way to do something like this without using the OmegaConf resolver. One possible route is through a `__post_init__` call.
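A minimal sketch of the `__post_init__` route (class and field names are illustrative, not Orbit's actual configs): the derived flag stays inside the dataclass itself, so it works the same under Hydra or any other config system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SimCfg:
    device: str = "cuda:0"
    # None means "derive from device"; an explicit bool always wins.
    use_gpu_pipeline: Optional[bool] = None

    def __post_init__(self) -> None:
        # Derive the pipeline flag from the device unless the user set it explicitly.
        if self.use_gpu_pipeline is None:
            self.use_gpu_pipeline = self.device.startswith("cuda")

print(SimCfg(device="cpu").use_gpu_pipeline)     # False
print(SimCfg(device="cuda:0").use_gpu_pipeline)  # True
```

Using `None` as the sentinel keeps the override path open: a user who passes `use_gpu_pipeline` explicitly is never second-guessed by the derivation.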
Completely with you on removing this piece of "resolving" on the main script side. Probably the `__post_init__` call can be used. Alternatively, we do neither of these and just make it a documentation thing: for the CPU/GPU implementation, you need to set those particular arguments in the config.
That would make life easier because, on the simulation side, we do sometimes debug with different `use_gpu` (PhysX buffers) and `use_gpu_pipeline` (staging buffers) flags. This means running the simulation on CPU (`use_gpu=False`) but reading its data on GPU (`use_gpu_pipeline=True`). Though this is a rather niche thing that many won't need.
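To illustrate the distinction, a hypothetical config that keeps the two flags independent (names follow the comment above, not PhysX's actual API) — which is exactly why auto-deriving one flag from the device would get in the way of this debug setup:

```python
from dataclasses import dataclass

@dataclass
class PhysxDebugCfg:
    use_gpu: bool = True           # where PhysX runs the simulation (PhysX buffers)
    use_gpu_pipeline: bool = True  # where the tensor data lives (staging buffers)

# Niche debug setup: simulate on CPU but read the data back on GPU.
debug_cfg = PhysxDebugCfg(use_gpu=False, use_gpu_pipeline=True)
print(debug_cfg)
```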
Do you want to start another GitHub issue (proposal) to discuss the recommendations you have for using Hydra? It's probably easier to keep track of that discussion there than in the PR itself :)
Yeah, that makes sense! I'll set up a GitHub issue for tracking other Hydra-related stuff!
The explicit setting + documentation is probably the right move. However, it can be handy to have "opinionated and sane" default behavior that is resolved behind the scenes, e.g. in `__post_init__`, so people can quickly jump in before they fully understand all of the configurable options.
@Mayankm96 example: hydra config main.py omni.isaac.lab.sim.GroundPlaneCfg.py
and the `__post_init__` code can solve this problem.
I found a way to handle the tuple type from Hydra.
Closing this in favor of #700
Description
This PR adds a method to register the environment configuration instance as a Hydra configuration object. Doing so allows configuring a bunch of environment parameters from the command line. It finally delivers on the main motivation for using the `configclass` decorator.

Fixes #57
Checklist
- Ran `pre-commit` checks with `./orbit.sh --format`
- Updated the `config/extension.toml` file